Accurate, automated tumor segmentation plays an important role in both clinical practice and radiomics research. In current practice, segmentation is often performed manually by experts, which is laborious, expensive, and error-prone: manual annotation relies heavily on the experience and knowledge of the annotator and suffers from substantial intra- and inter-observer variation. It is therefore of great value to develop methods that segment tumor target regions automatically. In this paper, we propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT), which combines the high sensitivity of PET with the precise anatomical information of CT. We design an improved spatial attention network (ISA-Net) to increase the accuracy of tumor detection in PET or CT; it uses multi-scale convolution to extract feature information, highlighting tumor-region locations while suppressing non-tumor regions. In addition, the network takes dual-channel inputs in the encoding stage and fuses them in the decoding stage, exploiting the differences and complementarity between PET and CT. We validated ISA-Net on two clinical datasets, a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset, and compared it with other attention-based tumor segmentation methods. DSC scores of 0.8378 on STS and 0.8076 on HECKTOR show that ISA-Net achieves better segmentation performance and generalizes better. Conclusions: the proposed method performs multi-modal medical image tumor segmentation and effectively exploits the differences and complementarity between modalities; with suitable adjustment it can also be applied to other multi-modal or single-modal data.
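As a rough illustration of the ideas above, the following PyTorch sketch shows a multi-scale spatial-attention block and a dual-branch PET/CT encoder whose features are fused before decoding. Module names, kernel sizes, and channel counts are illustrative assumptions, not the authors' exact ISA-Net design.

```python
import torch
import torch.nn as nn

class MultiScaleSpatialAttention(nn.Module):
    """Illustrative spatial-attention block: parallel convolutions at different
    scales produce a per-pixel mask that emphasizes likely tumor locations."""
    def __init__(self, channels):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(channels, channels // 2, kernel_size=5, padding=2)
        self.mask = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([self.conv3(x), self.conv5(x)], dim=1)
        attn = torch.sigmoid(self.mask(feats))   # values near 1 highlight tumor regions
        return x * attn                          # suppress non-tumor locations

class DualBranchSegmenter(nn.Module):
    """Toy two-branch encoder (PET and CT) whose features are fused for decoding."""
    def __init__(self, channels=32):
        super().__init__()
        self.pet_enc = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
        self.ct_enc = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
        self.attn = MultiScaleSpatialAttention(2 * channels)
        self.head = nn.Conv2d(2 * channels, 1, kernel_size=1)  # binary tumor-mask logits

    def forward(self, pet, ct):
        fused = torch.cat([self.pet_enc(pet), self.ct_enc(ct)], dim=1)
        return self.head(self.attn(fused))

# usage on a dummy PET-CT slice pair
pet = torch.randn(1, 1, 128, 128)
ct = torch.randn(1, 1, 128, 128)
logits = DualBranchSegmenter()(pet, ct)
print(logits.shape)  # torch.Size([1, 1, 128, 128])
```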
Reference-based Super-resolution (RefSR) approaches have recently been proposed to overcome the ill-posed problem of image super-resolution by providing additional information from a high-resolution image. Multi-reference super-resolution extends this approach by allowing more information to be incorporated. This paper proposes a 2-step-weighting posterior fusion approach to combine the outputs of RefSR models given multiple references. Extensive experiments on the CUFED5 dataset demonstrate that the proposed method can be applied to various state-of-the-art RefSR models and yields a consistent improvement in image quality.
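A minimal sketch of fusing per-reference SR outputs with two weighting stages is given below. The particular choice of a per-pixel confidence map combined with a per-reference global score is an illustrative stand-in for the paper's 2-step-weighting posterior fusion, not its exact formulation.

```python
import numpy as np

def fuse_sr_outputs(sr_outputs, pixel_conf, global_scores):
    """Fuse multiple RefSR outputs (one per reference) into a single image.

    sr_outputs:    (R, H, W, C) super-resolved candidates, one per reference
    pixel_conf:    (R, H, W)    per-pixel confidence for each candidate (step 1, assumed)
    global_scores: (R,)         per-reference relevance score (step 2, assumed)
    """
    # combine the two weighting steps multiplicatively, then normalize per pixel
    weights = pixel_conf * global_scores[:, None, None]            # (R, H, W)
    weights = weights / (weights.sum(axis=0, keepdims=True) + 1e-8)
    return (weights[..., None] * sr_outputs).sum(axis=0)           # (H, W, C)

# dummy example with 3 references
R, H, W, C = 3, 64, 64, 3
fused = fuse_sr_outputs(
    np.random.rand(R, H, W, C),
    np.random.rand(R, H, W),
    np.array([0.5, 0.3, 0.2]),
)
print(fused.shape)  # (64, 64, 3)
```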
Maintaining anonymity while communicating in natural language remains a challenge. Standard authorship attribution techniques that analyze a candidate author's writing style achieve uncomfortably high accuracy even when the number of candidates is large. Adversarial stylometry defends against authorship attribution with the goal of preventing unwanted deanonymization. This paper reproduces and replicates experiments from a study of defenses against authorship attribution (Brennan et al., 2012). Although the original study lacked a control group, we conclude that we were able to successfully reproduce and replicate the original results. In our replication, we find new evidence suggesting that a fully automatic method, round-trip translation, merits re-examination, as it appears to reduce the effectiveness of established authorship attribution methods.
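A minimal sketch of round-trip translation as an automatic obfuscation step is shown below. The `translate` callable and the pivot languages are placeholders for whatever machine-translation backend is used; the study does not prescribe a specific system.

```python
from typing import Callable

def round_trip(text: str,
               translate: Callable[[str, str, str], str],
               pivots=("de", "ja")) -> str:
    """Obfuscate writing style by translating through pivot languages and back.

    `translate(text, src, tgt)` is a placeholder for any MT backend;
    the pivot languages are arbitrary illustrative choices.
    """
    current, src = text, "en"
    for tgt in pivots:
        current = translate(current, src, tgt)
        src = tgt
    return translate(current, src, "en")   # back to the original language

# usage with a dummy backend that just tags the text (stand-in for a real MT system)
demo = round_trip("The quick brown fox.", lambda t, s, d: f"[{s}->{d}] {t}")
print(demo)
```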
Classification on long-tailed distributed data is a challenging problem that suffers from severe class imbalance and hence poor performance, especially on tail classes. Recently, ensemble-based methods have achieved state-of-the-art performance and shown great potential. However, current methods have two limitations. First, their predictions are not trustworthy for failure-sensitive applications, which is especially harmful for tail classes, where wrong predictions are relatively frequent. Second, they assign a uniform number of experts to all samples, which is redundant for easy samples and incurs excessive computational cost. To address these issues, we propose a Trustworthy Long-tailed Classification (TLC) method that jointly performs classification and uncertainty estimation to identify hard samples in a multi-expert framework. TLC obtains evidence-based uncertainty (EvU) and evidence for each expert, and then combines these uncertainties and evidences under the Dempster-Shafer evidence theory (DST). Moreover, we propose dynamic expert engagement to reduce the number of engaged experts for easy samples, achieving efficiency while maintaining promising performance. Finally, we conduct comprehensive experiments on the tasks of classification, tail detection, OOD detection, and failure prediction. The experimental results show that the proposed TLC outperforms state-of-the-art methods and is trustworthy, with reliable uncertainty.
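The sketch below shows evidence-based uncertainty derived from one expert's logits and Dempster's rule for combining two experts' belief masses. The softplus evidence mapping and the conflict normalization follow standard evidential-learning formulations and are assumptions, not the TLC paper's exact equations.

```python
import numpy as np

def evidence_to_belief(logits):
    """Map one expert's logits to (per-class belief, uncertainty mass).

    Common evidential formulation: evidence = softplus(logits),
    alpha = evidence + 1, belief_k = evidence_k / S, uncertainty = K / S.
    """
    evidence = np.log1p(np.exp(logits))          # softplus
    alpha = evidence + 1.0
    S = alpha.sum()
    return evidence / S, logits.shape[0] / S

def dempster_combine(b1, u1, b2, u2):
    """Combine two experts' (belief, uncertainty) with Dempster's rule."""
    # conflict: mass assigned to incompatible class pairs
    conflict = np.outer(b1, b2).sum() - (b1 * b2).sum()
    scale = 1.0 / (1.0 - conflict)
    b = scale * (b1 * b2 + b1 * u2 + b2 * u1)
    u = scale * (u1 * u2)
    return b, u                                   # b.sum() + u == 1

# two hypothetical experts over 4 classes
b1, u1 = evidence_to_belief(np.array([2.0, 0.1, -1.0, 0.5]))
b2, u2 = evidence_to_belief(np.array([1.5, 0.2, -0.5, -0.2]))
b, u = dempster_combine(b1, u1, b2, u2)
print(b, u)  # fused per-class belief and remaining uncertainty
```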